Online search has become a significant activity in the daily lives of individuals throughout much of the world. The almost instantaneous availability of billions of web pages has revolutionized the way people seek information. Despite the increasing importance of online search behavior in decision making and problem solving, very little is known about why people stop searching for information online. In this paper, we review the literature concerning online search and cognitive stopping rules, and then describe specific types of information search tasks. Based on this theoretical development, we generated hypotheses and conducted an experiment in which 115 participants each performed three search tasks on the web. Our findings show that people use a number of stopping rules to terminate search, and that the stopping rule used depends on the type of task performed. Implications for online information search theory and practice are discussed.
We examine the case of software reuse as a disruptive information technology innovation (i.e., one that requires changes in the architecture of work processes) in software development organizations. Using theories of conflict, coordination, and learning, we develop a model to explain peer-to-peer conflicts that are likely to accompany the introduction of disruptive technologies and how appropriately devised managerial interventions (e.g., coordination mechanisms and organizational learning practices) can lessen these conflicts. A study of software reuse programs in four organizations was conducted to assess the validity of the model. Qualitative and quantitative analyses of the data obtained showed that companies that had implemented such managerial interventions experienced greater success with their software reuse programs. Implications for theory and practice are discussed.
Understanding the cognitive activities of analysts during information requirements determination (IRD) has been recognized as a key indicator of IRD success. The research presented here examines one such cognitive activity: analysts' determination of the sufficiency of information gathered during the elicitation of requirements. Research in behavioral decision-making has identified various heuristics, or stopping rules, that are used to gauge the sufficiency of the information obtained and to terminate information acquisition. Although analysts undoubtedly employ such stopping rules in requirements elicitation, no research has studied this phenomenon. Here, we present a classification of stopping rules appropriate for information-gathering problems. Stopping-rule use was identified for 54 practicing systems analysts participating in a requirements determination problem in a laboratory setting. Results indicated that analyst experience influences the application of specific cognitive stopping rules, and that the use of these stopping rules affects requirements determination outcomes. In particular, the use of certain stopping rules resulted in greater quantity and completeness of requirements elicited from users. Theoretical implications for the elicitation of information and practical implications for the training of systems analysts are discussed.
Eliciting requirements from users and other stakeholders is of central importance to information systems development. Despite this importance, surprisingly little research has measured the effectiveness of various requirements elicitation techniques. The present research first discusses theory relevant to information requirements determination in general and elicitation in particular. We then develop a model of the requirements elicitation process. This model and its underlying theory were used to construct a new requirements elicitation prompting technique. To provide a context for testing the relative effectiveness of the new technique, two other questioning methodologies were also operationalized as prompting techniques: (1) the interrogatories technique, which involves asking "who," "what," "when," "where," "how," and "why" questions, and (2) a semantic questioning scheme, which involves asking questions based on a theoretical model of knowledge structures. To measure the usefulness of the prompting techniques in eliciting requirements, a set of generic requirements categories was adapted from previous research to capture requirements evoked by users. The effectiveness of the three techniques in eliciting requirements for a software application was then tested in an experiment with users. Results showed that the new prompting technique elicited a greater quantity of requirements from users than did the other two techniques. Implications of the findings for research and systems analysis practice are discussed.